AttDLNet: Attention-Based Deep Network for 3D LiDAR Place Recognition

Abstract

Place recognition has often been incorporated in SLAM and localization systems to support the autonomous navigation of robots and intelligent vehicles. With the increasing capacity of DL approaches to learn useful information from 3D LiDARs, place recognition has also benefited from this modality, which has led to higher re-localization and loop-closure detection performance, particularly in environments with significantly changing conditions. Despite the progress in the field, the efficient extraction of invariant descriptors from LiDAR data is still a challenging problem in the place recognition domain. In this work, we propose a novel LiDAR-based deep network that resorts to a self-attention mechanism to, on the one hand, leverage the computational efficiency of these operations and, on the other, reweigh the relevant local features and thus create more discriminative descriptors. The proposed network is trained and validated on the KITTI dataset, and an ablation study is presented to assess the components of the network. Results show that adding attention improves performance, leading to efficient loop closures and outperforming an established approach. From the ablation study, results indicate that the middle encoder layers have the highest mean performance, while deeper layers are more robust to orientation change. The code is publicly available at: https://github.com/Cybonic/AttDLNet
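
The abstract's central idea is a self-attention block that reweighs local encoder features before they are pooled into a global place descriptor. The PyTorch sketch below illustrates that mechanism; it is not the authors' implementation, and the tensor shapes, projection sizes, and final pooling step are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of self-attention reweighing
# local LiDAR encoder features before global pooling. Shapes assumed.
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce query/key/value projections
        self.query = nn.Conv1d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv1d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv1d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, n_points) local features from the encoder
        q = self.query(x).permute(0, 2, 1)             # (B, N, C//8)
        k = self.key(x)                                # (B, C//8, N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)  # (B, N, N)
        v = self.value(x)                              # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1))      # reweighed features
        return self.gamma * out + x                    # residual connection

# A global descriptor could then be obtained by pooling, e.g.:
# descriptor = SelfAttentionBlock(256)(features).max(dim=-1).values
```

The zero-initialized gamma parameter lets training start from the unmodified encoder features and gradually learn how strongly to apply the attention reweighing.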

Similar Articles

Real-Time Lidar-Based Place Recognition Using Distinctive Shape Descriptors

A key component in the emerging localization and mapping paradigm is an appearance-based place recognition algorithm that detects when a place has been revisited. This algorithm can run in the background at a low frame rate and be used to signal a global geometric mapping algorithm when a loop is detected. An optimization technique can then be used to correct the map by ‘closing the loop’. This...
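
A minimal sketch of the background loop-detection step this abstract describes: compare the current place descriptor against a database of past descriptors and signal the mapper when a revisit is found. The descriptor extractor, the optimizer hook, and the threshold are hypothetical placeholders, not the paper's distinctive shape descriptors.

```python
# Hedged sketch of the background loop-detection step; the descriptor
# extractor and the 0.2 threshold are placeholders, not the paper's.
import numpy as np

def detect_loop(descriptor, database, threshold=0.2, exclude_recent=50):
    """Return the index of the best-matching past place, or None."""
    if len(database) <= exclude_recent:        # skip very recent frames
        return None
    candidates = np.stack(database[:-exclude_recent])
    dists = np.linalg.norm(candidates - descriptor, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

database = []
# For each incoming scan (run in the background at a low frame rate):
#     d = compute_descriptor(scan)                    # hypothetical
#     match = detect_loop(d, database)
#     if match is not None:
#         trigger_pose_graph_optimization(match)      # 'close the loop'
#     database.append(d)
```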

Joint Network based Attention for Action Recognition

By extracting spatial and temporal characteristics in one network, the two-stream ConvNets can achieve state-of-the-art performance in action recognition. However, such a framework typically suffers from the separate processing of spatial and temporal information in the two standalone streams and struggles to capture the long-term temporal dependence of an action. More importantly, it is in...
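
As a rough illustration of handling both cues in one network, the sketch below fuses spatial and temporal feature vectors with a learned attention weighting instead of two fully standalone streams. The layer sizes and tensor shapes are assumptions, not the paper's architecture.

```python
# Assumed-shape sketch of joint spatial-temporal fusion via attention,
# replacing two fully standalone streams with one weighted combination.
import torch
import torch.nn as nn

class JointAttentionFusion(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 2)       # per-sample stream weights
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, spatial: torch.Tensor, temporal: torch.Tensor):
        # spatial, temporal: (batch, dim) features from the two branches
        scores = self.attn(torch.cat([spatial, temporal], dim=1))
        weights = torch.softmax(scores, dim=1)  # (batch, 2), sums to 1
        fused = weights[:, :1] * spatial + weights[:, 1:] * temporal
        return self.classifier(fused)
```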

Convolutional Neural Network-based Place Recognition

Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, by combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset we achieve ...
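
A hedged sketch of the "CNN features plus sequential filter" idea: build a cosine-similarity matrix between query and database feature vectors, then score short aligned frame sequences rather than single frames. The diagonal sequence scoring is an assumed SeqSLAM-style stand-in, not necessarily the paper's exact filter.

```python
# Sketch of CNN features + a crude sequential filter (shapes assumed).
import numpy as np

def similarity_matrix(query_feats, db_feats):
    """Cosine similarities between (n_q, d) query and (n_db, d) db rows."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    return q @ d.T

def sequence_match(sim, seq_len=5):
    """Best db index for the latest query, scoring a straight diagonal of
    seq_len consecutive matches instead of a single frame."""
    n_q, n_db = sim.shape
    assert n_q >= seq_len
    best_score, best_idx = -np.inf, None
    for j in range(seq_len - 1, n_db):
        score = sum(sim[n_q - seq_len + k, j - seq_len + 1 + k]
                    for k in range(seq_len))
        if score > best_score:
            best_score, best_idx = score, j
    return best_idx, best_score
```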

Anomaly-based Web Attack Detection: The Application of Deep Neural Network Seq2Seq With Attention Mechanism

Today, the use of the Internet and websites has become an integral part of people's lives, and most activities and important data reside on websites. Thus, attempts to intrude into these websites have grown exponentially. Intrusion detection systems (IDS) for web attacks are one approach to protecting users. But these systems suffer from drawbacks such as low accuracy in ...

3D-based Deep Convolutional Neural Network for action recognition with depth sequences

Traditional algorithms that design hand-crafted features for action recognition have been a hot research area in the last decade. Compared to RGB video, depth sequences are less sensitive to lighting changes and more discriminative due to their capability to capture the geometric information of objects. Unlike many existing methods for action recognition that depend on well-designed features, this paper stud...
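
For intuition, a toy PyTorch 3D-CNN is sketched below: 3D convolutions slide over depth frames jointly in space and time, so geometry and motion cues are learned together rather than hand-crafted. The input size and class count are made up for illustration.

```python
# Toy 3D-CNN over a depth clip; input size and class count are made up.
import torch
import torch.nn as nn

model = nn.Sequential(
    # input: (batch, 1, frames, height, width) depth sequence
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool3d(2),                 # halve time and space
    nn.Conv3d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),         # global spatio-temporal pooling
    nn.Flatten(),
    nn.Linear(32, 10),               # e.g. 10 action classes (assumed)
)

clip = torch.randn(2, 1, 16, 64, 64)  # dummy batch of depth clips
logits = model(clip)                   # -> (2, 10)
```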

Journal

Journal title: Lecture Notes in Networks and Systems

Year: 2022

ISSN: 2367-3370, 2367-3389

DOI: https://doi.org/10.1007/978-3-031-21065-5_26